We investigate solution methods for large-scale inverse problems governed by partial differential equations (PDEs) via Bayesian inference. The Bayesian framework provides a statistical setting for inferring uncertain parameters from noisy measurements. To quantify posterior uncertainty, we adopt Markov chain Monte Carlo (MCMC) approaches for generating samples. To increase the efficiency of these approaches in high dimensions, we exploit local gradient and Hessian information about the target potential, in particular via Hamiltonian Monte Carlo (HMC). Our target application is inferring the soil permeability field from observations of pore pressure, using a nonlinear PDE poromechanics model to predict pressure from permeability. We compare the performance of different sampling approaches in this and other settings, and investigate how the dimensionality and non-Gaussianity of the target distribution affect the performance of each method.
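As a minimal sketch of the gradient-based sampling idea described above, the following code runs HMC with leapfrog integration on a toy two-dimensional standard Gaussian target; the potential U(q) plays the role of the negative log-posterior. All names, step sizes, and the target itself are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def U(q):
    # Potential (negative log-density) of a standard Gaussian target.
    return 0.5 * q @ q

def grad_U(q):
    # Gradient of the potential; in the PDE setting this would come
    # from an adjoint solve rather than a closed form.
    return q

def hmc_step(q, rng, eps=0.1, n_leap=20):
    p = rng.standard_normal(q.shape)        # sample auxiliary momentum
    q_new, p_new = q.copy(), p.copy()
    # Leapfrog integration of the Hamiltonian dynamics.
    p_new -= 0.5 * eps * grad_U(q_new)
    for _ in range(n_leap - 1):
        q_new += eps * p_new
        p_new -= eps * grad_U(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)
    # Metropolis accept/reject on the change in total energy.
    dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
    if np.log(rng.uniform()) < -dH:
        return q_new, True
    return q, False

rng = np.random.default_rng(0)
q = np.zeros(2)
samples, accepts = [], 0
for _ in range(2000):
    q, accepted = hmc_step(q, rng)
    accepts += accepted
    samples.append(q)
samples = np.array(samples)
```

Because the leapfrog integrator nearly conserves the Hamiltonian, acceptance rates stay high even for distant proposals, which is the source of HMC's efficiency in high dimensions.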
Optimal exploration of engineering systems can be guided by the principle of Value of Information (VoI), which accounts for the topological importance of components, their reliability, and the management costs. For series systems, higher inspection priority should in most cases be given to unreliable components. For redundant systems such as parallel systems, analysis of one-shot decision problems shows that higher inspection priority should be given to more reliable components. This paper investigates the optimal exploration of redundant systems in long-term decision making with sequential inspection and repair. When the expected cumulative discounted cost is considered, it may become more efficient to give higher inspection priority to less reliable components, in order to preserve system redundancy. To investigate this problem, we develop a Partially Observable Markov Decision Process (POMDP) framework for sequential inspection and maintenance of redundant systems, in which VoI analysis is embedded in the optimal selection of exploratory actions. We investigate the use of alternative approximate POMDP solvers for parallel and more general systems, compare their computational complexity and performance, and show how the inspection priorities depend on the economic discount factor, the degradation rate, the inspection precision, and the repair cost.
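The one-shot result quoted above can be reproduced with a small VoI calculation for a hypothetical two-component parallel system (all numbers are invented for illustration, not taken from the paper). The system fails only if both components fail; the agent may repair one component or do nothing, and a perfect inspection of component i has VoI equal to the prior optimal expected cost minus the expected posterior optimal cost.

```python
# Hypothetical two-component parallel system: component 0 is less
# reliable (failure probability 0.3) than component 1 (0.1).
P_FAIL = [0.3, 0.1]
C_REPAIR, C_FAIL = 1.0, 10.0

def expected_cost(p):
    # Parallel system: it fails only if every component has failed.
    do_nothing = C_FAIL * p[0] * p[1]
    repair = C_REPAIR  # repairing either component rules out failure
    return min(do_nothing, repair)

def voi(i):
    # VoI of a perfect inspection of component i: prior optimal cost
    # minus the preposterior (expected post-inspection) optimal cost.
    prior = expected_cost(P_FAIL)
    post = 0.0
    for state, prob in ((1.0, P_FAIL[i]), (0.0, 1.0 - P_FAIL[i])):
        p = list(P_FAIL)
        p[i] = state       # inspection reveals the true state
        post += prob * expected_cost(p)
    return prior - post
```

With these numbers, inspecting the less reliable component 0 has zero VoI (the optimal action never changes), while inspecting the more reliable component 1 has positive VoI, illustrating why one-shot analysis favors inspecting more reliable components in parallel systems.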
The operation and maintenance of infrastructure components and systems can be modeled as a Markov process, partially or fully observable. Information about the current condition can be summarized by the “inner” state of a finite state controller. When a control policy is assigned, the stochastic evolution of the system is completely described by a Markov transition function. This article applies finite state Markov chain analyses to identify relevant features of the time evolution of a controlled system. We focus on assessing whether some critical conditions are reachable (or whether some actions will ever be taken), and on identifying the probability of these critical events occurring within a time period, their expected time of occurrence, their long-term frequency, and the probability that some events occur before others. We present analytical methods based on linear algebra to address these questions, and discuss their computational complexity and the structure of the solutions. The analyses can be performed after a policy is selected for a Markov decision process (MDP) or a partially observable MDP. Their outcomes depend on the selected policy, and examining them can give decision makers a deeper understanding of the consequences of following that policy, and may also suggest revising it.
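The kind of linear-algebra analysis described above can be sketched on a hypothetical three-state controlled chain (the transition matrix is invented for illustration): states 0 and 1 are transient and state 2 is an absorbing "critical" state. The reachability probability and expected hitting time both follow from the fundamental matrix N = (I - Q)^{-1} of standard absorbing-chain theory.

```python
import numpy as np

# Hypothetical row-stochastic transition matrix under a fixed policy;
# state 2 is the absorbing critical condition.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.0, 0.0, 1.0]])

Q = P[:2, :2]                     # transient-to-transient block
R = P[:2, 2:]                     # transient-to-absorbing block
N = np.linalg.inv(np.eye(2) - Q)  # fundamental matrix (I - Q)^{-1}

reach_prob = (N @ R).ravel()      # prob. of ever reaching the critical state
hit_time = N @ np.ones(2)         # expected number of steps to absorption
```

Probabilities of reaching the critical state within a finite horizon come from powers of P instead, and competing events ("which critical state is hit first") reduce to the same linear system with several absorbing blocks.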
The Value of Information (VoI) assesses the impact of data in a decision process. A risk-neutral agent, quantifying the VoI in monetary terms, prefers to collect data only if their VoI surpasses the cost of collecting them. For an agent acting without external constraints, data have non-negative VoI (as free “information cannot hurt”), and data with an almost-negligible potential effect on the agent's belief have an almost-negligible VoI. However, these intuitive properties do not hold for an agent acting under external constraints related to epistemic quantities, such as those posed by some regulations. For example, a manager forced to repair an asset when its probability of failure is too high may prefer to avoid collecting free information about the actual condition of the asset, and even to pay in order to avoid it, or she may assign a high VoI to almost-irrelevant data. Hence, by enforcing epistemic constraints in the regulations, the policy-maker can induce a range of counter-intuitive but rational behaviors, from information avoidance to over-evaluation of barely relevant information, in the agents obeying the regulations. This paper illustrates how the structural properties of VoI change depending on such external epistemic constraints, and discusses how incentives and penalties can alleviate these induced attitudes toward information.
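The information-avoidance effect described above can be shown numerically with a hypothetical repair problem (all numbers are invented for illustration, not the paper's). An asset fails with prior probability 0.1; doing nothing costs the failure probability times a failure cost, repairing has a fixed cost, and a regulation forces repair whenever the agent's belief exceeds a threshold. A free but noisy inspection can push the posterior over the threshold, so the constrained agent would rather not look.

```python
# Hypothetical numbers: prior failure probability, repair cost,
# failure cost, and the regulatory belief threshold that forces repair.
P_PRIOR, C_REPAIR, C_FAIL, THRESHOLD = 0.1, 5.0, 10.0, 0.2
LIK_BAD = {True: 0.9, False: 0.2}   # P(signal "bad" | failed / working)

def cost(p, constrained):
    repair, nothing = C_REPAIR, C_FAIL * p
    if constrained and p > THRESHOLD:
        return repair                # regulation forces the repair
    return min(repair, nothing)      # unconstrained optimal action

def preposterior_cost(constrained):
    # Expected optimal cost after observing the free noisy signal.
    total = 0.0
    for bad in (True, False):
        lik = LIK_BAD if bad else {k: 1.0 - v for k, v in LIK_BAD.items()}
        p_signal = lik[True] * P_PRIOR + lik[False] * (1.0 - P_PRIOR)
        p_post = lik[True] * P_PRIOR / p_signal   # Bayes' rule
        total += p_signal * cost(p_post, constrained)
    return total

voi_free = cost(P_PRIOR, False) - preposterior_cost(False)
voi_constr = cost(P_PRIOR, True) - preposterior_cost(True)
```

Unconstrained, the signal never changes the optimal action, so its VoI is zero; under the epistemic constraint the same free signal has strictly negative VoI, so the agent would rationally pay to avoid it.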